
    RF and Network Signature-based Machine Learning on Detection of Wireless Controlled Drone

    Over the years, drone usage has become an increasing part of the ever-connected society we live in. Its use has proliferated beyond the military sector to various commercial and consumer activities such as package delivery, disaster relief, agriculture and filming. Wi-Fi controlled drones in particular have grown in popularity for personal use due to their affordability and the ease of operating them through smart devices such as mobile phones, tablets and computers. This increases the likelihood of drone presence in various environments, especially around critical government infrastructure, raising privacy and security concerns among the authorities and the public, since such drones may be operated with malicious intent. Various signature-based drone detection methodologies have therefore emerged, such as visual and Radio Frequency (RF) signature-based detection. Visual signature-based detection relies on camera capture and image processing, but this is an expensive approach, whereas RF signature-based detection relies on identifying the RF signals emitted by the drone. However, since most commercial electronic devices are built on Wi-Fi technology, differentiating the RF signals transmitted by a drone from those of a standard Wi-Fi device in a crowded Wi-Fi environment, such as a school campus or city area, is a challenging task. In this paper, we propose a novel machine learning approach that leverages the unique signatures of Wi-Fi devices, in terms of both Radio Frequency (RF) and network packet measurements, to distinguish Wi-Fi drones from standard Wi-Fi devices in an urban setting. Furthermore, we carry out a meticulous pre-processing procedure and an improved training scheme using Stratified K-Fold Cross-Validation (SKFCV) to enrich the data signatures and fully exploit permutations of the data during training, thereby improving the performance of the ML models. Two supervised classification Machine Learning (ML) models, namely Logistic Regression (LR) and an Artificial Neural Network (ANN), were applied to the joint data measurements to identify the presence of a drone in a dense Wi-Fi environment. The experimental results show that the proposed approach of using both RF and network measurement signatures, coupled with the pre-processing and training methodology, outperforms traditional RF signature-based drone detection accuracy by 15.1% for LR and 21.63% for ANN in a crowded Wi-Fi environment.
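
    A minimal sketch of the kind of training loop described above, assuming the joint RF and network-packet measurements are already available as a feature matrix (the feature layout, data and scikit-learn pipeline here are illustrative assumptions, not the paper's implementation):

        # Sketch: stratified K-fold training of a logistic-regression drone-vs-Wi-Fi
        # classifier on joint RF/network features (synthetic, hypothetical data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import StratifiedKFold
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 6))      # e.g. RSSI, packet-rate and packet-size statistics
        y = rng.integers(0, 2, size=1000)   # 1 = Wi-Fi drone, 0 = standard Wi-Fi device

        skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        accuracies = []
        for train_idx, test_idx in skf.split(X, y):
            model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
            model.fit(X[train_idx], y[train_idx])
            accuracies.append(model.score(X[test_idx], y[test_idx]))
        print("mean cross-validated accuracy: %.3f" % np.mean(accuracies))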

    TOA-based indoor localization and tracking with inaccurate floor plan map via MRMSC-PHD filter

    This paper proposes a novel indoor localization scheme to jointly track a mobile device (MD) and update an inaccurate floor plan map using the time-of-arrival measured at multiple reference devices (RDs). By modeling the floor plan map as a collection of map features, the map and MD position can be jointly estimated via a multi-RD single-cluster probability hypothesis density (MSC-PHD) filter. Conventional MSC-PHD filters assume that each map feature generates at most one measurement for each RD. If single reflections of the detected signal are considered as measurements generated by map features, then higher-order reflections, which also carry information on the MD and map features, must be treated as clutter. The proposed scheme incorporates multiple reflections by treating them as virtual single reflections reflected from inaccurate map features and traces them to the corresponding virtual RDs (VRDs), referred to as a multi-reflection-incorporating MSC-PHD (MRMSC-PHD) filter. The complexity of using multiple reflection paths arises from the inaccuracy of the VRD location due to inaccuracy in the map features. Numerical results show that these multiple reflection paths can be modeled statistically as a Gaussian distribution. A computationally tractable implementation combining a new greedy partitioning scheme and a particle-Gaussian mixture filter is presented. A novel mapping error metric is then proposed to evaluate the estimated map's accuracy for plane surfaces. Simulation and experimental results show that our proposed MRMSC-PHD filter outperforms the existing MSC-PHD filters by up to 95% in terms of average localization accuracy and by up to 90% in terms of mapping accuracy.
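
    The virtual-RD construction can be illustrated with the standard image method: a single specular reflection off a planar map feature has the same path length as the direct path from the RD mirrored across that plane. A small 2-D sketch under that assumption (the geometry values are illustrative, and none of the PHD filtering machinery is reproduced):

        # Image-method sketch: mirror a reference device (RD) across a planar map
        # feature to obtain a virtual RD (VRD); the reflected path length equals the
        # direct VRD-to-MD distance.  All coordinates are made up for illustration.
        import numpy as np

        def mirror_across_plane(p, p0, n):
            """Mirror point p across the plane through p0 with normal n."""
            n = n / np.linalg.norm(n)
            return p - 2.0 * np.dot(p - p0, n) * n

        rd = np.array([0.0, 0.0])        # reference device
        md = np.array([4.0, 1.0])        # mobile device
        wall_p0 = np.array([0.0, 3.0])   # a point on the wall (map feature)
        wall_n = np.array([0.0, 1.0])    # wall normal

        vrd = mirror_across_plane(rd, wall_p0, wall_n)
        print("virtual RD:", vrd, "reflected path length:", np.linalg.norm(vrd - md))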

    An Indoor Localization and Tracking System Using Successive Weighted RSS Projection

    This letter proposes a novel successive weighted received signal strength (RSS) indoor localization and tracking system that projects the mobile device (MD) position estimated at the previous time instance to provide projected RSS values. Such RSS projection increases the number of available RSS values from N_m to N_m + N_AP, where N_AP is the total number of access points and N_m is the number of RSS values measured by the MD, ranging from 0 to N_AP. Our proposed system thus resolves the issues associated with insufficient or no RSS values being received by the MD. An inertial navigation system (INS) is merged with the RSS localization system to provide a weighted fusion of projected and measured RSS values. The weighting factors are derived from the INS and RSS localization accuracy, where the former is initially accurate but deteriorates with time and the latter is time-independent but environment-dependent. The proposed system was tested in indoor environments and outperformed other existing localization systems, such as RSS and INS fusion using an extended Kalman filter and a non-line-of-sight (NLOS) selection scheme, especially in heavy multipath environments, by 42% and 75%, respectively.
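
    A toy sketch of the weighting idea, assuming a simple exponential decay of confidence in the INS estimate with time since its last correction (the decay law and all values are illustrative assumptions, not the letter's derived weighting factors):

        # Toy weighted fusion of an INS position estimate (accurate at first, drifting
        # over time) with an RSS estimate (time-independent, environment-dependent).
        import numpy as np

        def fuse(ins_pos, rss_pos, t_since_correction, tau=10.0):
            w_ins = np.exp(-t_since_correction / tau)   # assumed decay of INS trust
            w_rss = 1.0 - w_ins
            return w_ins * np.asarray(ins_pos) + w_rss * np.asarray(rss_pos)

        print(fuse(ins_pos=[2.0, 3.0], rss_pos=[2.5, 2.6], t_since_correction=5.0))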

    Localization in GPS denied environment

    No abstract available

    5G Radar and Wi-Fi Based Machine Learning on Drone Detection and Localization

    Drone usage has been proliferating for various government initiatives, commercial benefits and civilian leisure purposes. Mismanaged drones, especially civilian drones, can easily expose threats and vulnerabilities in the Government Public Key Infrastructures (PKI) that hold crucial operations, affecting the survival and economy of the country. As such, detecting and locating these drones immediately, before they can act on their payload, is crucial. Existing drone detection solutions are bulky, expensive and hard to set up in real time. With the advent of 5G and the Internet of Things (IoT), this paper proposes a cost-effective bistatic radar solution that leverages the 5G cellular spectrum to detect the presence of and localize a drone. Coupled with the K-Nearest Neighbours (KNN) Machine Learning (ML) algorithm, features of the Non-Line-of-Sight (NLOS) transmissions observed by the 5G radar and the Received Signal Strength Indicator (RSSI) emitted by the drone are used to predict the location of the drone. The proposed 5G radar solution can detect the presence of a drone in both outdoor and indoor environments with an accuracy of 100%. Furthermore, it can localize the drone with an accuracy of up to 75%. These results show that a cost-effective radar machine learning system operating on the 5G cellular network spectrum can be developed to identify and locate a drone in real time.
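
    A minimal sketch of the KNN stage on features of the kind described, with synthetic values standing in for the 5G-radar NLOS measurements and drone RSSI (the feature set, zone labels and data are illustrative assumptions):

        # Sketch: KNN prediction of a drone's location zone from 5G-radar NLOS
        # features and drone RSSI (synthetic, illustrative data).
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(1)
        X_train = rng.normal(size=(300, 3))     # e.g. [NLOS delay, NLOS power, RSSI]
        y_train = rng.integers(0, 4, size=300)  # four hypothetical location zones

        knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
        print(knn.predict(rng.normal(size=(1, 3))))  # predicted zone for a new sample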

    Big Data Scenarios Simulator for Deep Learning Algorithm Evaluation for Autonomous Vehicle

    One of the challenges in developing autonomous vehicles (AV) is the collection of suitable real-environment data for training and evaluating machine learning algorithms for autonomous vehicles. Such environment data, collected via the various sensors mounted on an AV, is big data in nature, requiring massive investments of time and money, and in some specific scenarios collection could pose a significant danger to human lives. This necessitates a virtual scenario simulator that reproduces the real environment by generating big-data images from a virtual fisheye lens that can mimic the field of view and radial distortion of the commercially available camera lens of any manufacturer and model. In this paper, we propose a novel fisheye lens distortion system to generate big-data scenario images to train and test image-based sensing functions and to evaluate scenarios according to Euro NCAP standards. A total of 10,123 RGB, depth and segmentation images of varying road scenarios were generated by the proposed system in approximately 14 hours, compared to 20 hours for existing methods, a 42.86% improvement.
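
    One common way such a virtual lens can be parameterized is a polynomial radial-distortion model applied to normalized image coordinates; a small sketch under that assumption (the coefficients are made up, and the simulator's actual lens model is not reproduced):

        # Sketch: polynomial radial distortion r_d = r * (1 + k1*r^2 + k2*r^4) applied
        # to normalized image coordinates (illustrative coefficients).
        import numpy as np

        def radial_distort(xy, k1=-0.3, k2=0.05):
            xy = np.asarray(xy, dtype=float)
            r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
            return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

        points = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5]])  # normalized coordinates
        print(radial_distort(points))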

    An indoor UWB 3D positioning method for coplanar base stations

    As an indispensable type of information, location data are used in various industries. Ultrawideband (UWB) technology has been used for indoor location estimation due to its excellent ranging performance. However, the accuracy of the location estimation results is heavily affected by the deployment of base stations; in particular, the base station deployment space is limited in certain scenarios. In underground mines, base stations must be placed on the roof to ensure signal coverage, which makes their geometry almost coplanar. Existing indoor positioning solutions suffer from both difficulties in the correct convergence of results and poor positioning accuracy under coplanar base-station conditions. To correctly estimate position in coplanar base-station scenarios, this paper proposes a novel iterative method. Based on the Newton iteration method, a selection range for the initial value and iterative convergence control conditions were derived to improve the convergence performance of the algorithm. In this paper, we mathematically analyze the impact of coplanar base stations on the localization solution and derive an expression for the localization accuracy. The proposed method demonstrated a positioning accuracy of 5 cm in the comparative experimental campaign, with the multi-epoch observation results being stable within 10 cm. Furthermore, it was found that, when base stations are coplanar, the test-point accuracy can be improved by an average of 63.54% compared to the conventional positioning algorithm. In the coplanar base-station deployment scenario, the upper bound of the CDF convergence of the proposed method outperformed the conventional positioning algorithm by about 30%.
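
    A compact Gauss-Newton-style sketch of range-based positioning with roof-mounted (coplanar) anchors; the paper's initial-value selection range and convergence-control conditions are not reproduced, and the geometry below is illustrative:

        # Sketch: iterative refinement of a 3-D position from measured ranges to
        # coplanar anchors.  With coplanar anchors there is a mirror solution on the
        # other side of the anchor plane, which is why the initial value matters.
        import numpy as np

        def iterative_position(anchors, ranges, x0, iters=20):
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                diff = x - anchors                    # (N, 3)
                dist = np.linalg.norm(diff, axis=1)   # predicted ranges
                J = diff / dist[:, None]              # Jacobian of predicted ranges
                step, *_ = np.linalg.lstsq(J, ranges - dist, rcond=None)
                x = x + step
                if np.linalg.norm(step) < 1e-6:
                    break
            return x

        anchors = np.array([[0, 0, 3], [5, 0, 3], [0, 5, 3], [5, 5, 3]], dtype=float)
        true_pos = np.array([2.0, 3.0, 1.0])
        ranges = np.linalg.norm(anchors - true_pos, axis=1)   # noiseless ranges
        print(iterative_position(anchors, ranges, x0=[2.5, 2.5, 0.0]))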

    UWB sensor based indoor LOS/NLOS localization with support vector machine learning

    Ultra-wideband (UWB) sensor technology is known to achieve high-precision indoor localization accuracy in line-of-sight (LOS) environments, but its localization accuracy and stability suffer detrimentally in non-line-of-sight (NLOS) conditions. Current NLOS/LOS identification based on the channel impulse response's (CIR) characteristic parameters (CCP) improves location accuracy, but most CIR-based identification approaches do not sufficiently exploit the CIR information and are environment-specific. This paper derives three new CCPs and proposes a novel two-step identification/classification methodology with dynamic threshold comparison (DTC) and a fuzzy credibility-based support vector machine (FC-SVM). The proposed SVM-based classification methodology leverages the derived CCPs obtained from the waveform and its channel analysis, which are more robust to environment and obstacle dynamics. This is achieved in two steps: a coarse-grained NLOS/LOS identification with the DTC strategy, followed by FC-SVM to give the fine-grained result. Finally, based on the obtained identification results, a real-time ranging error mitigation strategy is designed to improve the ranging and localization accuracy. Extensive experimental campaigns were conducted in different LOS/NLOS scenarios to evaluate the proposed methodology. The results show that the mean LOS/NLOS identification accuracy in the various testing scenarios is 93.27%, and the LOS and NLOS recalls are 94.27% and 92.57%, respectively. The ranging errors in LOS (NLOS) conditions are reduced from 0.106 m (1.442 m) to 0.065 m (0.739 m), an improvement of 38.85% (48.74%) with a 0.041 m (0.703 m) error reduction. The average positioning error is also reduced from 0.250 m to 0.091 m, an improvement of 63.49% (0.159 m), outperforming the state-of-the-art Least-Squares Support Vector Machine (LS-SVM) and K-Nearest Neighbour (KNN) approaches.
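
    A minimal sketch of the SVM classification stage on CIR-derived features; the paper's three derived CCPs, DTC thresholds and fuzzy-credibility weighting are not reproduced, and the feature values below are synthetic:

        # Sketch: SVM LOS/NLOS classification on CIR-derived features
        # (synthetic, illustrative data and feature names).
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 4))     # e.g. rise time, kurtosis, energy ratio, ...
        y = rng.integers(0, 2, size=500)  # 1 = NLOS, 0 = LOS

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X[:400], y[:400])
        print("held-out accuracy:", clf.score(X[400:], y[400:]))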

    Learners’ differences in blended learner-centric approach for a common programming subject

    As the number of students entering higher education increases, with a growing diversity of backgrounds, educators of programming courses face increasing challenges. Different teaching pedagogies need to be explored for students with different background knowledge. Some students find programming courses difficult to understand and practice, which may lead to demotivation and disengagement in the learning process, with a consequential impact on their grades. Addressing these issues demands approaches for effectively teaching programming courses to multidisciplinary cohorts. This article investigates how computing science (CS) and engineering cohorts respond differently to teaching approaches in a common module, Fundamentals of Programming. Both traditional teacher-centric teaching and a blended learner-centric approach were explored with a diverse group of students. The blended learner-centric approach combines classroom teaching and self-paced blended learning using the worked-examples video method. These two teaching approaches were evaluated in Academic Year 2019/2020. The evaluation results from the 92 CS and 150 Engineering students who participated in this research show that performance improved by about 5% with the blended learner-centric approach. It was further observed that, quantitatively, the performance gap between CS and Engineering students was reduced. A questionnaire survey was also conducted, with 54 CS and 89 Engineering students responding. The learners' perceptions of the blended learner-centric approach were also compared between the two cohorts.

    Non-verbal auditory aspects of human-service robot interaction

    As service robots become ever more pervasive, the number, degree and depth of their interactions with humans, particularly fellow workers, are increasing rapidly. Humans are generally shaped alike, respond in predominantly similar ways and are often inherently predictable to other humans. Robots, by contrast, have an exceptional diversity of size, shape, mobility and function, and their intentions or actions are often less predictable. Humans working in close proximity have learnt to provide cues to their behaviour, both verbal and non-verbal, and we argue that this is an important aspect of maintaining both safety and comfort in a mixed work or social environment. At present, robots do not provide any such cues to their fellow workers, which can be a cause of human discomfort and can indeed contribute to safety issues when working in close proximity to humans. This paper considers the non-verbal auditory aspects of interaction in a work environment, with particular emphasis on the safe and comfortable integration of service robots into such locations. In particular, we propose a classification of interaction levels to inform the construction, programming and operation of robots in the workplace.